
    Searching for New Physics using Classical and Quantum Machine Learning

    The development of machine learning (ML) has provided the High Energy Physics (HEP) community with new methods of analysing collider and Monte Carlo generated data. As experiments are upgraded to generate an increasing number of events, classical techniques can be supplemented with ML to improve our ability to find signs of New Physics in the high-dimensional event data. This thesis presents three approaches to supervised and unsupervised searches using novel ML methods. The first uses an autoencoder to perform an unsupervised anomaly-detection search. We demonstrate that this method enables a data-driven, model-independent search for New Physics. Furthermore, we show that by extending the model with an adversary we can account for systematic errors that may arise in experiments. The second method develops a form of quantum machine learning for a supervised search. Using a variational quantum classifier (a neural-network-style model built from quantum information principles), we demonstrate that a quantum advantage arises when compared with a classical network. Finally, we make use of the continuous-variable (CV) paradigm of quantum computing to build an unsupervised method of classifying events stored as graph data. Gaussian boson sampling provides an example of a quantum advantage unique to the CV approach to quantum computing and allows our events to be used in an anomaly-detector model built on the Q-means clustering algorithm.
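    To make the first approach concrete, the sketch below shows an autoencoder used as an anomaly detector: the network is trained to reconstruct (mostly background) events, and events with large reconstruction error are flagged as candidates for New Physics. This is a minimal illustration in PyTorch, not the thesis's actual model; the layer sizes, training loop and score definition are assumptions.

    # Minimal autoencoder anomaly detector (illustrative only).
    # Architecture, sizes and thresholding are assumptions, not the thesis's model.
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, n_features: int, latent_dim: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(),
                nn.Linear(64, n_features),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train(model, background, epochs=20, lr=1e-3):
        """Train the autoencoder to reconstruct (mostly background) events."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(background), background)
            loss.backward()
            opt.step()
        return model

    def anomaly_scores(model, events):
        """Per-event reconstruction error; large values flag anomaly candidates."""
        with torch.no_grad():
            recon = model(events)
            return ((recon - events) ** 2).mean(dim=1)

    # Usage: events that the background-trained model reconstructs poorly are
    # candidate signal events.
    # model = train(AutoEncoder(n_features=30), background_events)
    # scores = anomaly_scores(model, data_events)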

    Next generation of consumer aerosol valve design using inert gases

    For over three decades, global consumer aerosol products such as deodorants, hairsprays, air-fresheners, polishes, insecticides and disinfectants have relied primarily on the environmentally unfriendly propellant liquefied petroleum gas (LPG). The advantages of the new technology described in this paper are: (i) no butane or other liquefied hydrocarbon gas; (ii) compressed air, nitrogen or another safe gas as propellant; (iii) customer-acceptable spray quality and consistency over the can lifetime; (iv) conventional cans and filling technology. Volatile organic compounds and greenhouse gases must be avoided, yet no flashing-propellant replacement provides comparably good atomisation and spray reach. Of the possible energy sources for atomising, the only feasible one is inert gas (e.g. compressed air), which improves atomisation through gas bubbles and turbulence inside the atomiser insert of the actuator. This research concentrates on using ‘bubbly flow’ in the valve stem, with injection of compressed gas into the passing flow, thus also generating turbulence. A vapour-phase tap in conventional aerosol valves admits propellant gas into the liquid flow upstream of the valve; however, forcing bubbly flow through a valve is not ideal. The novel valves designed here, using compressed gas, achieved the following objectives once the correct combination of gas and liquid inlets to the valve, and the type and size of atomiser ‘insert’, were derived: 1. a consistent flow rate and drop size of spray throughout the life of the can, compatible with current conventional aerosols that use LPG (a new ‘constancy’ parameter is defined and used to this end); 2. a discharge flow rate suited to the product to be sprayed, typically between 0.4 g/s and 2.5 g/s; 3. a spray droplet size suited to the product to be sprayed, typically between 40 µm and 120 µm.

    MTrainS: Improving DLRM training efficiency using heterogeneous memories

    Recommendation models are very large, requiring terabytes (TB) of memory during training. In pursuit of better quality, model size and complexity grow over time, which requires additional training data to avoid overfitting. This model growth demands substantial resources in data centers; hence, training efficiency is becoming considerably more important to keep data center power demand manageable. In Deep Learning Recommendation Models (DLRM), sparse features capturing categorical inputs through embedding tables are the major contributors to model size and require high memory bandwidth. In this paper, we study the bandwidth requirement and locality of embedding tables in real-world deployed models. We observe that the bandwidth requirement is not uniform across different tables and that embedding tables show high temporal locality. We then design MTrainS, which leverages heterogeneous memory, including byte- and block-addressable Storage Class Memory, hierarchically for DLRM. MTrainS allows for higher memory capacity per node and increases training efficiency by lowering the need to scale out to multiple hosts in memory-capacity-bound use cases. By optimizing the platform memory hierarchy, we reduce the number of nodes for training by 4-8x, saving power and cost of training while meeting our target training performance.
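    The core placement idea, keeping hot embedding tables in fast memory and letting cold ones spill to Storage Class Memory, can be illustrated with a short sketch. This is not the MTrainS policy itself; the greedy heuristic, table statistics and tier capacities below are invented for illustration.

    # Illustrative tier assignment for embedding tables by access frequency.
    # A simplified sketch of the placement idea, not the actual MTrainS policy;
    # table sizes, access counts and tier capacities are made up.
    from dataclasses import dataclass

    @dataclass
    class Table:
        name: str
        size_gb: float
        accesses_per_step: float  # proxy for bandwidth demand

    def assign_tiers(tables, hbm_gb, dram_gb):
        """Greedy placement: hottest tables go to the fastest tier with room left."""
        placement = {}
        remaining = {"HBM": hbm_gb, "DRAM": dram_gb}
        for tab in sorted(tables, key=lambda t: t.accesses_per_step, reverse=True):
            for tier in ("HBM", "DRAM"):
                if remaining[tier] >= tab.size_gb:
                    placement[tab.name] = tier
                    remaining[tier] -= tab.size_gb
                    break
            else:
                placement[tab.name] = "SCM"  # cold tables spill to Storage Class Memory
        return placement

    tables = [
        Table("user_id", 400.0, 1e6),
        Table("item_id", 900.0, 8e5),
        Table("region", 2.0, 5e4),
        Table("rare_feature", 1200.0, 1e3),
    ]
    print(assign_tiers(tables, hbm_gb=500, dram_gb=1000))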

    Standardized reporting of the costs of management interventions for biodiversity conservation

    Get PDF
    Effective conservation management interventions must combat threats and deliver conservation benefits at costs that can be achieved within limited budgets. Considerable effort has focused on measuring the potential benefits of conservation interventions, but explicit quantification of implementation costs has been rare. Even when costs have been quantified, haphazard and inconsistent reporting means that published values are difficult to interpret. This reporting deficiency hinders progress towards building a collective understanding of the costs of management interventions across projects, and thus limits our ability to identify efficient solutions to conservation problems or attract adequate funding. We address this challenge by proposing a standardized approach to describing costs reported for conservation interventions. These standards call for researchers and practitioners to ensure the cost data they collect and report provide enough contextual information that readers and future users can interpret the data appropriately. We suggest these standards be adopted by major conservation organizations, conservation science institutions, and journals, so that cost reporting is comparable between studies. This would support shared learning and enhance our ability to identify and perform cost-effective conservation.
    Funding was provided by CEED (GDI and workshop), an ARC Laureate Fellowship (HPP, BM, VA and JM), Arcadia (WJS), the Natural Environment Research Council (LVD, NE/K015419/1; NE/N014472/1) and the Wildlife Conservation Society (AJP).
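    As an illustration of what standardized cost reporting could look like in practice, the sketch below defines a simple cost record that carries contextual fields alongside the monetary value. The field names and structure are hypothetical examples, not the standards actually proposed in the paper.

    # Hypothetical cost record for a conservation intervention (illustrative only;
    # these fields are NOT the reporting standards proposed in the paper).
    from dataclasses import dataclass

    @dataclass
    class InterventionCost:
        intervention: str      # e.g. "invasive predator control"
        location: str          # where the intervention was carried out
        year: int              # year the cost was incurred
        amount: float          # reported cost
        currency: str          # currency of the reported amount
        cost_type: str         # e.g. "labour", "equipment", "total project"
        duration_years: float  # period the reported cost covers
        area_ha: float         # spatial extent the reported cost covers
        notes: str = ""        # further context needed to interpret the value

    record = InterventionCost(
        intervention="fencing",
        location="example reserve",
        year=2020,
        amount=12500.0,
        currency="USD",
        cost_type="equipment",
        duration_years=1.0,
        area_ha=40.0,
    )
    print(record)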